
[ITP: Programming A2Z] Final Project Progress

AI-defined text-based portrait

Since my thesis deals with portraiture and self-portraiture, I thought it could be interesting to see if I could get the Replicate Llama chatbot to describe what I look like based on the text I’ve written on my website and these blog posts.

Here are a couple of snippets of my convo from last week in which I’ve highlighted the physical details:

[Screenshots 1 and 2 of the conversation]

Below are some more snapshots from another convo with the Llama. It felt like pulling teeth trying to get a physical description of myself; he was really fighting me on making any aesthetic assumptions. Eventually I got something pretty weird. The last two screenshots show mostly other-worldly descriptions.

[Screenshots 1 through 4 of the conversation]

During office hours, Dan told me that the bot wasn’t actually going to my website and reading the text; it was purely guessing the next most likely character based on its training data and our previous conversation … and then I realized that’s what this whole class is about!!! So basically the bot is just predicting text and telling me what it thinks I want to hear. It’s not being “trained” on any of this text or even referring to it, it’s just making its best guess. Kinda ironic, but good that I finally understood this!

Either way, Mr. Llama is super complimentary when it comes to describing me, and the convo really helped to inflate my ego. Below I categorized the descriptions by physical feature and started creating a pencil sketch. Because the descriptions were really “out there,” I tried to lean into the kind of deity or god-like imagery sometimes found in representations of Hindu gods. There’s some kind of critique in this work, like AI playing god or something?!

Here’s the final, painted, AI-collaboration portrait. I think I look pretty good!

Priyanka Phone

I also revisited my assignment from a couple of weeks ago where I was generating texts from my Dad. Previously, I had no visual elements for that p5 sketch so I wanted to create an interface that looked like my own phone. I drew up some assets using my iPad and Procreate and made the sketch look more like my phone home screen. Now the generated texts come in periodically like they do on my actual phone. You can try out this sketch for yourself here.
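The timing part of the sketch is pretty simple. Here’s a minimal sketch of the idea; generateDadText() is just a stand-in for my actual Markov/Tracery generator, and the interval and placeholder message are made up:

```javascript
// generateDadText() stands in for my real text generator; the numbers are made up.
let messages = [];
let lastMessageTime = 0;
const messageInterval = 8000; // a new text roughly every 8 seconds

function setup() {
  createCanvas(390, 700); // roughly phone-shaped
  textSize(14);
}

function draw() {
  background(240);
  // push a new generated text every messageInterval milliseconds
  if (millis() - lastMessageTime > messageInterval) {
    messages.push(generateDadText());
    lastMessageTime = millis();
  }
  // draw the messages top to bottom like a thread
  for (let i = 0; i < messages.length; i++) {
    text(messages[i], 20, 60 + i * 40);
  }
}

// placeholder for the real generator
function generateDadText() {
  return 'Reminder: call your grandmother.';
}
```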

Trying to recreate my experience of and relationship with my phone is interesting to me because in some ways it also feels like a self-portrait. These days our devices have become extensions of ourselves. Honestly, my relationship with my phone seems so shallow because sometimes I’m doomscrolling for hours until I’m completely hollow, and the only texts that really matter are the ones from my loved ones and DoorDash. Is everything a self portrait?!

Future to-do’s for this self portrait:

  • Make all the texts address Priyanka. For variety’s sake, some of the options in the DoorDash context-free grammar are names other than mine, which doesn’t make sense if I’m trying to simulate my own phone.

  • Cindy also gave me a really small screen, and I would totally love to make this a physical piece at some point.

  • I’d also really like to make an unconventional phone enclosure to house the screen and the Raspberry Pi running my sketch. I would like to explore my relationship with my phone more, but at the moment it really feels like it makes my life worse. My attention span is shot. Any notification will completely rip my attention away from whatever I’ve been doing or thinking. How could I convey that in a physical form?

  • Is the screen touch? Can the sketch do something when you click on a text? Does it make sound?

[ITP: Programming A2Z] Transformers and Final Project Ideas

Llama Llama Red Pajama

On the Dangers of Stochastic Parrots

  • Increasing the environmental and financial costs of these models doubly punishes marginalized communities that are least likely to benefit from the progress achieved by large LMs and most likely to be harmed by the negative environmental consequences of their resource consumption.

  • A majority of cloud compute providers’ energy is not sourced from renewables, and many energy sources around the world are not carbon neutral

    • Negative effects of climate change are impacting marginalized communities the most

    • Researchers should prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources

  • Using the “Common Crawl” dataset

    • In accepting large amounts of web text as ‘representative’ of ‘all’ humanity we risk perpetuating dominant viewpoints, increasing power imbalances, and further reifying inequality

    • Social movements which are poorly documented and which do not receive significant media attention will not be captured at all. Media coverage can fail to cover protest events and social movements and can distort events that challenge state power

  • Documentation debt: datasets are undocumented and too large to document post hoc. Undocumented training data perpetuates harm without recourse.

  • Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind.

  • Disseminating texts generated by LMs would mean more text in the world that reinforces and propagates stereotypes and problematic associations.

Foundation Model Transparency Index

  • FMTI evaluates 100 different aspects of transparency, from how a company builds a foundation model to how it works and how it is used downstream

    • Questions involving intellectual property, labor practices, energy use, and bias

    • “In our view, if this rigorous process [Googling] didn’t find information about an indicator, then the company hasn’t been transparent about it.”

  • Less transparency makes it harder for other businesses to know if they can safely build applications that rely on commercial foundation models

Assignment

Chatbot with Llama using Replicate

I’ve been a little overwhelmed with all the things we’ve covered in the last weeks of class, so I thought it was best to try out a variety of things for this assignment. I started by downloading the ChatBot with Llama Replicate example. Replicate is a repository of different machine learning models that you can pay to use in projects. Below are some examples of featured models:

It took me a second to get my bearings, even with all the code handy. I didn’t realize that I didn’t have Node installed on my interim computer, so I needed to get all of that set up. Then, on the command line, I navigated to where this example lives on my PC. The “npm install” command installed all the dependencies this example requires from the package.json file.

I tried running the example but I got the ‘Missing required parameter: auth’ error. Haha, I need an API token, duh! I got an invite from Shiffman to join the ITP organization set up for Replicate and generated my own token to use. I created a .env file in the example directory with this token. Then I was ready to npm run start and navigated to http://localhost:3000 in my browser.
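For anyone else setting this up, this is roughly what the token plumbing looks like in Node. I’m assuming the environment variable is named REPLICATE_API_TOKEN here, so match whatever name the example’s server code actually reads:

```javascript
// Load the .env file and read the token back out; REPLICATE_API_TOKEN is my assumption.
require('dotenv').config();

const token = process.env.REPLICATE_API_TOKEN;
if (!token) {
  throw new Error('No Replicate API token found in .env');
}
console.log('Token loaded, length:', token.length); // never log the token itself!
```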

My first convo with the chatbot got deleted by accident but here’s an overview of my second conversation with the Llama Chatbot:

1. Trying to get help with my final for this class … and questioning the chatbot on its breathing abilities.

2. Text-based portrait of Dan Shiffman

3. Text-based portrait of Priyanka Makin (based on my website)

4. Asking the chatbot if it could make any conclusions about my physical appearance from the text on my site. This question killed it!

I am honestly surprised by how much time I spent talking with this chatbot; I think I got pretty into it! Using other generative text tools, like ChatGPT, is not really part of my current workflow. I’m actually a bit of a skeptic, so it is extra surprising to me how much I enjoyed talking to this llama.

Replicate and p5.js

Next I tried the Replicate p5.js example. This pretty much worked straight out of the box but I needed to wait a second for the responses to come back from the model.

I was actually texting one of my BFFs from my undergrad, Kendle, as I was doing this assignment. She studied something similar to ITP and is an AI hater like myself. Two of the prompts above actually came from her. The final prompt was “college roommates that are all grown up now” and this continually threw an NSFW warning and wouldn’t generate an image. Seems I’ve reached the limits of AI in a matter of 4 prompts…

Interactive Drawing with SketchRNN

It was pretty complicated, but I just HAD TO try out the Interactive Drawing Coding Challenge because visual, and doodle, and scribble is kinda my thing. And let me just say… I LOVED the silliness of this tutorial. Meow meow meow meow!!!

Sketch RNN is a recurrent neural network trained on the Quick, Draw! open source dataset from Google. It has many different models for different drawings, and sketchRNN is included as part of the ml5 library! I honestly just followed the video; there’s lots of complicated math that I can’t really explain. The user draws a starting stroke as seed points for the chosen model, and then the model finishes the drawing based on those points. There’s some smart line simplification happening before the points are fed into the sketchRNN model. This example is super cool even though it was kinda over my head, but my sketch is here.
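For my own notes, here’s a stripped-down version of the basic flow (no user seed strokes), as far as I can reconstruct it from the video and the ml5 docs; treat it as a sketch rather than the actual challenge code:

```javascript
// Ask the 'cat' sketchRNN model for strokes one at a time and draw them.
let model;
let x, y;
let strokePath = null;
let pen = 'down';

function setup() {
  createCanvas(400, 400);
  background(220);
  x = width / 2;
  y = height / 2;
  model = ml5.sketchRNN('cat', modelReady); // load the 'cat' model
}

function modelReady() {
  model.generate(gotStroke); // ask for the first stroke
}

function gotStroke(err, s) {
  strokePath = s; // each stroke is { dx, dy, pen }
}

function draw() {
  if (strokePath) {
    if (pen === 'down') {
      stroke(0);
      strokeWeight(3);
      line(x, y, x + strokePath.dx, y + strokePath.dy);
    }
    x += strokePath.dx;
    y += strokePath.dy;
    pen = strokePath.pen;
    strokePath = null;
    if (pen !== 'end') {
      model.generate(gotStroke); // keep asking until the model says 'end'
    }
  }
}
```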

Cat drawing when the pen isn’t being picked up

Nice cat drawing

Changed the model to face, green is what’s drawn by the user

Final Project Ideas

Or things I’ve started and haven’t really completed yet…

  • 2 bots that scream at each other. Scream into the digital void as much as you want! Cathartic?!

  • Dad (Markov) + DoorDash (CFG) text generator

    • Create a message app UI

    • P-comp-ify this? Run p5 sketch on a lil screen? Lil fake phone?

    • Phones as extension of self/identity

  • Thesis: Self portrait! Body! Artificial Life!

    • ?????

    • Give a LLM? something? and have it describe myself back to me? And then I draw that?

      • My website, instagram, journal, idk how to do any of this

  • Ask chat-gpt to read my blog and tell me what I’m at school for, what my degree is, what job I can get

[ITP: Programming A2Z] Markov Chains and Context-Free Grammars

Notes

  • A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event (there’s a tiny code sketch of this after these notes)

    • Can simulate “stickiness” by modifying probabilities

  • Grammar = “language of languages”. Provides a framework to describe the syntax and structure of any language

    • Context-free grammar = rules don’t depend on surrounding context

    • Vocab: alphabet, terminal and non-terminal symbols, production rules, axiom

  • n-gram = contiguous sequence of characters or words

    • Used to reconstruct or generate a text that is statistically similar to the original text

    • Frequently used in natural language processing and text analysis

    • Unit of an n-gram is called the level, length is called order

  • A Markov chain will return less nonsense if the n-gram is longer; the output will be closer to the original text

  • Tracery = generative text tool that creates text that is algorithmically combinatorial and surprising while still seeing the authorial “voice” in the finished text
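To make sure the n-gram idea actually stuck, here’s a toy character-level Markov chain in plain JavaScript. It’s my own little reconstruction of the technique, not code from the class examples:

```javascript
// Build a map from every n-gram to the characters that follow it, then walk the map.
function markov(source, order, length) {
  const ngrams = {};
  for (let i = 0; i <= source.length - order; i++) {
    const gram = source.substring(i, i + order);
    const next = source.charAt(i + order);
    if (!ngrams[gram]) ngrams[gram] = [];
    if (next) ngrams[gram].push(next);
  }

  // walk the chain, starting from the beginning of the source text
  let current = source.substring(0, order);
  let result = current;
  for (let i = 0; i < length; i++) {
    const possibilities = ngrams[current];
    if (!possibilities || possibilities.length === 0) break;
    result += possibilities[Math.floor(Math.random() * possibilities.length)];
    current = result.substring(result.length - order);
  }
  return result;
}

console.log(markov('the theremin is there in the theater', 3, 60));
```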

Predictive Text

Thinking about natural language processing and predictive text, one of my friends made me aware of the predictive text feature in iMessage (a setting I didn’t have turned on). Basically, I tapped the middle button over and over again for different text conversations. The generated messages are below, different for different threads. The text to my boyfriend just devolved into “mama” “mommy” over and over. Really weird…

Assignment

It’s that point in the semester where my brain is completely dead and I’m having a hard time thinking of text that I’ve come across that follows a pattern. Yikes! But my last food delivery order of sub-par Pad Thai sparked some inspiration.

Building off of the Coding Train example on context-free grammars with Tracery, I created a context-free grammar that simulates the super-friendly texts I get from DoorDash. Creating this grammar with the Tracery library was really easy because it essentially works like a mad-lib, choosing an item randomly from a word bank that I hard-coded. You can find the p5 sketch for my DoorDash CFG here.
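To give a sense of how the grammar works, here’s a stripped-down version of the idea. The rules below are made-up placeholders, not my actual word bank, and it assumes the Tracery library is already loaded in the sketch:

```javascript
// Each rule is a word bank; Tracery picks one option wherever a #symbol# appears.
const grammar = tracery.createGrammar({
  origin: ['Hi #name#! #dasher# is heading to #restaurant# to pick up your order. #signoff#'],
  name: ['Priyanka'],
  dasher: ['Alex', 'Sam', 'Jordan'],
  restaurant: ['the Thai place', 'the pizza spot', 'the dumpling shop'],
  signoff: ['Sit tight!', 'Yum!', 'Track your order in the app.'],
});

console.log(grammar.flatten('#origin#')); // expand the grammar starting from #origin#
```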

Also, Gracy gave me the genius idea to try generating texts from my dad since we compare dad texts sometimes. My dad is … involved to say the least. Generally, I’ll get reminders, motivations and wisdoms, un-solicited “suggestions”, and the occasional dad-in-the-life update with visuals. I started with the Markov Chain example and put in my dad-text data. This was particularly annoying to gather because I’m working on my PC these days so I had to manually select and copy+paste each text and compose emails to myself, etc, etc. Maybe this would’ve been easier using the iMessage app on my MacBook?! Anyway, I just created a new txt file in the p5 sketch with an assortment of my dad’s recent texts to me… and that’s it really! My sketch is here. Not sure how to define it specifically but the generated texts feel pretty authentic to me!

DoorDash texts

Real dad texts

Generated dad texts

You know, I actually really love that I landed on these two sketches because it is only my dad and DoorDash that text me haha! Or maybe that’s just how it feels sometimes!

I realize that these sketches I modified for the assignment are super close to the examples provided and are pretty simplistic. I haven’t had the time to build these examples out fully yet. In the future, I’d love to create more options for the DoorDash CFG. I think it would be kinda funny if I generated texts that sometimes responded back to my dad. Also, I want to build out the UI in p5 to look like the Messages app on my phone so that I could put my generated text in a meaningful visual context.

Resources

Priyanka’s DoorDash CFG sketch

Priyanka’s Dad Text generator sketch

https://github.com/galaxykate/tracery/tree/tracery2

[ITP: Programming A2Z] Chatbots

Bots Thoughts

  • Bots follow the PUDG model = procedural, uncreative, data-driven, graffiti

    • “unoriginal genius” = remixing pre-existing textual artifacts

    • Interventions in a public space, inherently politically progressive

  • “My bot isn’t me but something I’m responsible for”

    • Need to think through all the misuses and consequences

  • Techniques: matching and substitution

    • AIML (Artificial Intelligence Markup Language)

    • Retrieval model = picks a response from a fixed set based on the input and context

    • Generative model = generates new responses from scratch based on Machine Translation techniques

    • RiveScript = simple language with “Unix-like” approach, takes human input and gives an intelligent response

  • Other considerations

    • Conversation length

    • Open vs. closed domain conversation

Assignment

For the bot assignment, I wanted to create a really simple screaming Discord bot, seasonal for spooky season! All this bot will do is scream at you or scream back whatever you tell it (change text to all uppercase letters). To get this going, I just watched some ✨super exclusive✨ ✨unreleased✨ Coding Train video tutorials and I hacked together the bot by referencing these examples.

I started by creating a Node project by making a new directory. In the terminal, I navigated to that directory and typed “npm init” to create the package.json file. The Node Package Manager (npm) installs packages and dependencies, which end up in the “node_modules” folder. To work with Discord, I also typed “npm install discord.js” to install the library, and “npm install dotenv” to store environment variables.

Then I went to discord.com/developers and logged into my account and created a new application. Then I generated the OAuth2 URL and made sure to select “bot” and “applications.commands”. When I followed that link, my bot was added to the Programming A2Z Discord server! Success!

I then created a .env file to hold onto my secret variables: my client ID, server ID, and bot token. I followed the tutorial video to use the discord.js library to log my bot in. The green dot shows it is live!

Success!
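For reference, the login part boils down to something like this (assuming discord.js v14; the .env variable name here is just my own choice):

```javascript
// Create a client, wait for it to be ready, and log in with the token from .env.
require('dotenv').config();
const { Client, GatewayIntentBits } = require('discord.js');

const client = new Client({ intents: [GatewayIntentBits.Guilds] });

client.once('ready', () => {
  console.log(`Logged in as ${client.user.tag}`); // the green dot!
});

client.login(process.env.DISCORD_TOKEN);
```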

Then I built the scream.js file, which handles the slash command. Each command will have its own JavaScript file. I also copied the deploy-commands.js file from Dan’s tutorial, which reads and registers all the commands. Then I finished up the bot.js file to handle the command interaction: if an interaction occurs and it is a “scream” command, the handler in scream.js runs.
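Here’s a sketch of what scream.js looks like, again assuming discord.js v14; the ‘text’ option name is my own placeholder:

```javascript
// scream.js: define the /scream command and shout the input back in all caps.
const { SlashCommandBuilder } = require('discord.js');

module.exports = {
  data: new SlashCommandBuilder()
    .setName('scream')
    .setDescription('Screams, or screams back whatever you say')
    .addStringOption((option) =>
      option.setName('text').setDescription('What should I scream back?')
    ),
  async execute(interaction) {
    const input = interaction.options.getString('text');
    const reply = input ? input.toUpperCase() : 'AAAAAAAHHHH!';
    await interaction.reply(reply); // respond to the slash command
  },
};
```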

Tada! Voila! It works!

I was so inspired by Tres’s arguing MCR bots that I created a second bot named scaredy bot! It is up and running, but maybe a future Priyanka will finish this project, in which the scream machine screams, scaredy bot gets scared and also screams, and the two bots just keep on screaming at each other…

You can find my code at this Github repo.

Questions:

When do I need to run deploy-commands.js? You only have to do it once. If you change anything in the execute() function, you don’t need to rerun deploy-commands.js. If you change something in the SlashCommandBuilder() definition, then rerun deploy-commands.js.

Resources

https://github.com/Programming-from-A-to-Z/Discord-Bot-Examples

https://discord.com/developers/applications

https://discordjs.guide/#before-you-begin


[ITP: Programming A2Z] Word Counting

Notes

  • “Artisanal data” coined by Sarah Groff-Palermo = small, fragmented, incomplete, human

    • Data to express who we are in the language of today

  • Concordance = list of principal words in a text, listing every instance with its immediate context

  • Sentiment analysis, pronouns hold the key! —> James Pennebaker

  • An associative array relates a key with a specific number or value

    • Un-ordered list, dictionary

  • loadStrings() function returns an array where each element is a line from the text

  • TF-IDF (term frequency inverse document frequency) = what words are important to this text versus others

    • score = term frequency * log(total # of docs / # of docs term appears in)

  • Corpus = collection of written texts

Assignment

1. Initial word counting, 2. splitting tokens by space, 3. removing German stop words

I’ve had this question written down in my notebook for the last week: “what are texts that are important/interesting to me?” I didn’t really get a chance to think too much about it before jumping into this assignment. I went to Project Gutenberg and the home page featured Meine Erinnerungen aus Ostafrika by General von Lettow-Vorbeck. German! I can understand that!

I wasn’t so sure about that specific text but I looked at the list of texts in German and I landed on this: Der Bucheinband: Seine Technik und seine Geschichte by Paul Adam, a book about the art and history of book binding from 1890. Pretty cool, I think!

I started by putting the plain text into a .txt file and removing the weird header and footer license stuff that was in English. I counted the words of this book using the code from the Coding Train tutorial.

In the initial counting I noticed that a lot of single letters were being counted as words, which was strange. When I compared my token array to the text itself, I figured out that the special German characters (ö, ä, ß, etc.) were causing issues with the token splitting. I changed the split call from splitting on non-word characters to splitting on whitespace, which made the word list more sensible to me.

When I look at the word list now, at the top are words like der, die, das, which all mean “the.” Und is “and,” mit is “with,” in is “in,” zu is “to”: all words that don’t mean much, right? These are considered stop words, which are just commonly used words in a language. We all know that pronouns can be really important, but I wanted to challenge myself to remove the stop words from my word count. Maybe then the word count would be more representative of the content of the text.

The internet is amazing! With a quick search, I found a complete list of German stop words on GitHub. I uploaded the “plain” list to my p5 sketch and put all the words into an array. Then, before creating a div for each word and its count, I did a quick check to see if that word is on the stop word list. My new word list has a lot of great book-words in it, like leather, fold, cover. Some other standout words for me are: weise = way, ganz = quite/all, genau = exactly/precisely, gut = good.
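A condensed version of what the sketch does looks roughly like this; the file names are placeholders for the Gutenberg text and the GitHub stop word list:

```javascript
// Load the book and the stop words, split on whitespace, count, skip stop words.
let bookLines, stopLines;

function preload() {
  bookLines = loadStrings('der-bucheinband.txt');
  stopLines = loadStrings('german-stopwords-plain.txt');
}

function setup() {
  noCanvas();
  const stopWords = new Set(stopLines.map((w) => w.trim().toLowerCase()));
  const counts = {};

  for (const line of bookLines) {
    // splitting on whitespace keeps ö, ä, ü, ß inside their words
    const tokens = line.toLowerCase().split(/\s+/);
    for (const token of tokens) {
      if (token.length === 0 || stopWords.has(token)) continue;
      counts[token] = (counts[token] || 0) + 1;
    }
  }

  // show the 50 most frequent non-stop-words as divs
  const sorted = Object.keys(counts).sort((a, b) => counts[b] - counts[a]);
  for (const word of sorted.slice(0, 50)) {
    createDiv(`${word}: ${counts[word]}`);
  }
}
```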

This book has really great illustrations and images depicting all the book binding techniques and many beautiful books. Some of my favorites are below:

Since I dabble in illustration too, I thought I could try my hand at visualizing some of the top words from Der Bucheinband:

[ITP: Programming from A2Z] APIs and Libraries

RiTa

I followed this Coding Train tutorial to learn how to use the RiTa.js library. RiTa has tools for natural language processing and generative writing. It seems the video is a bit outdated when it comes to using the current version of RiTa, so here are some things I noted:

  • This is how you include the library: <script src="https://unpkg.com/rita"></script>

  • I did not create a RiString; I used the tokenize() function

  • No Lexicon object, right?! I checked the reference and no Lexicon exists…

I did run into an error following the tutorial when trying to check for plural nouns, i.e. the part of speech with the PENN tag “nns”. By reading the documentation, I figured out that the randomWord() function expects an options object (rather than a bare string) as the parameter. So I fixed it, yay!
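Roughly what the fixed calls look like, as far as I understand the current RiTa v2 API (double-check the reference if this errors):

```javascript
// Tokenize a sentence, tag parts of speech, and ask for a random plural noun.
const words = RiTa.tokenize('The llamas were describing mysterious portraits');
console.log(words);                           // ['The', 'llamas', 'were', ...]
console.log(RiTa.pos('The llamas were describing mysterious portraits')); // POS tags
console.log(RiTa.randomWord({ pos: 'nns' })); // a random plural noun
```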

Dan, stop trying to make fetch happen! (Notes)

Sorry! Jk! I love the fetch() web API … but not actually because I don’t understand it.

  • loadJSON() function is asynchronous

  • fetch() returns a promise

  • Use .then() to deal with asynchronous promises

  • ES8 supports async/await

    • If a function returns a promise, you can await it

    • await can only be used inside an ASYNC function!

    • await can replace the .then() function for promises …. right?!

  • async functions by definition return a promise

  • API = application programming interface, how two applications communicate with each other

Assignment

Ok, so I’m officially SO CONFUSED! I wanted to see if I could get some data from the Datamuse API using async and await because I just went through all the video examples for week 3 and there was no direct Coding Train example for that. From all the videos I learned that you should only await functions that return a promise, right?! I’m still not 100% sure what a promise is either…

So this is what I wrote first; I modeled it after the code we wrote in the fetch video.

My API call is definitely correct. This is what I get when I put that url in my browser.

Console for async/await p5 sketch.

The response is really cryptic. It looks like it just returns a bunch of function names or something. Definitely not the JSON response to the Datamuse URL. So what’s all that stuff from lines 8-16? That’s the promise, right? Do I need that stuff?

Then I tried the loadJSON() function. This worked perfectly even though this isn’t how I wanted to retrieve the JSON data.

This response looks much better. This is the data I want to work with.

Then I also saw in the week 3 email from Dan that he attached a code example, “Data muse with async and await”. Great, this is exactly what I’m trying to do! So I rewrote my getDatamuse() function to call the json() function on the fetch response, hoping that this would finally unwrap the API data for me. Basically, the console log shows me that data is a Promise and it is perpetually pending; it never resolves. There’s a screenshot below.

AHHHH! I forgot the second awaittttt! For line 25! Now I’m getting my data from the API.
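For future me, this is roughly what the fixed getDatamuse() boils down to: both fetch() and response.json() return promises, so each one needs its own await. The query word here is just an example, not my sketch’s actual query:

```javascript
// Fetch related words from the Datamuse API and return the parsed JSON.
async function getDatamuse() {
  const url = 'https://api.datamuse.com/words?ml=portrait';
  const response = await fetch(url);   // await #1: the HTTP response
  const data = await response.json();  // await #2: parsing the JSON body
  return data;                         // an array of { word, score } objects
}

// Since getDatamuse() is async, callers either await it or use .then():
getDatamuse().then((words) => console.log(words));
```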

Alrighty, here’s what I made for my assignment! I even tried to load the image asynchronously by calling the fetch() function, but that did not work for drawing to the canvas; I had to use the preload() function like normal. Also, in my sketch I call my asynchronous function in draw(). I was originally trying to draw the words returned by the async function in draw() itself, but I quickly learned that wouldn’t work. The text with the retrieved word from the API needs to be drawn to the canvas inside the async function itself, and I lowered the frame rate of the sketch as well.

Some high art I made!

Q: Why use fetch and await instead of loadJSON?

References

All my p5.js sketches live here ❤️✨

https://github.com/dhowe/ritajs/tree/master

https://rednoise.org/rita/#reference

https://www.youtube.com/watch?v=lIPEvh8HbGQ

[ITP: Programming A2Z] Regular Expressions

DOM and ES6 Review

  • Arrow functions, “=>”, are a shorthand function notation in ES6

    • Works for anonymous functions which are unnamed

  • Switched from “var” to “let” in ES6 too

  • “this” refers to the current context of where you are in the code

  • Callback functions by themselves are synchronous in JS; asynchronous behavior comes from the APIs that invoke them

  • Promises are not supported in p5.js yet but you can use the fetch() function

    • There are three states: pending, fulfilled, and rejected

    • fetch() handles asynchronous events using the then() and catch() methods

  • Made my own word interactor! Could do some interesting things with this in the future…

RegEx Notes

  • RegEx patterns are written inside forward slashes: / ___ /

  • meta-characters: “\d” = 0-9, “\w” = A-Z or a-z or 0-9, “\s” = whitespace, “.” = any character

    • Capital letter means opposite or NOT

  • Quantifiers: “*” = 0 or more, “+” = 1 or more, “?” = 0 or 1, “{min, max}” = range, “{n}” = exactly n

  • Position: “^” = beginning of line, “$” = end of line, “\b” = word boundary

  • Character class [], OR

  • Capturing groups with ()

    • Group 0 is the full regular expression result

    • “$1” and “\1” (back reference) refers to group 1

  • JavaScript functions that use RegEx: test(), match(), exec(), split(), replace() (there’s a little sandbox example right after these notes)

    • Use flags g = global to return every match to the RegEx or i = case insensitive matches
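A quick sandbox I can paste into the console to remind myself how these fit together; the strings are throwaway examples:

```javascript
// test() checks for a match, match() with the g flag returns every match,
// and replace() can use $1/$2 to refer back to the capture groups.
const text = 'Email me at priyanka@example.com or ask dan@example.edu';
const emailRegex = /(\w+)@(\w+\.\w+)/g; // two capture groups, global flag

console.log(/\d{3}/.test('call 555-0123'));        // true: three digits in a row
console.log(text.match(emailRegex));               // every email-ish match
console.log(text.replace(emailRegex, '$1 at $2')); // rewrite using the groups
```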

Assignment - MadLibs Generator

Here is my MadLibs generator; I followed this Coding Challenge.

As I was following this challenge, I saw that the Tabletop.js library has been deprecated since 2020, so I transitioned to using Papa Parse 5, which converts CSV to JSON.
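The loading part boils down to something like this; the CSV file name and the generateMadLib() function are placeholders for my actual sketch:

```javascript
// Download and parse the CSV, then hand the rows to the (hypothetical) generator.
Papa.parse('madlib-words.csv', {
  download: true, // treat the first argument as a URL and fetch it
  header: true,   // use the first CSV row as the keys (noun, verb, ...)
  complete: (results) => {
    // results.data is an array of row objects, e.g. results.data[0].noun
    generateMadLib(results.data);
  },
});
```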

Here are some other important notes from following the challenge:

  • Need to “escape” the dollar signs in the RegEx because they are meta-characters

  • Whatever you return in the replacer function is what the expression results will be replaced with

  • Global flag!!

  • We kind of touched on this in class, but entry.noun is equivalent to entry['noun']

Now that I’ve completed the MadLib generator, I’ve noticed that sometimes a MadLib is generated with all “undefined”s. My gut instinct was that it came from the first line of the .csv file, but I think that’s accounted for in the Papa Parse declaration with “header: true”. Actually, I just realized the “undefined”s come when I try to generate a MadLib before the file is loaded! Duh!

Another question I have: I’m still not sure how the “replacer” function really works. How does it know the part of speech that corresponds to the matched regular expression?
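Writing out a tiny test answered most of this for me: the replacer function receives the full match and then each capture group as arguments, so the captured part-of-speech name comes along for free. The template syntax below is made up, not necessarily what the challenge uses:

```javascript
// The replacer gets (match, group1, ...) for every match the regex finds.
const template = 'The $noun$ decided to $verb$ very $adverb$.';
const words = { noun: 'llama', verb: 'scream', adverb: 'loudly' };

const result = template.replace(/\$(\w+)\$/g, (match, partOfSpeech) => {
  // match is e.g. '$noun$'; partOfSpeech is the captured group, e.g. 'noun'
  return words[partOfSpeech];
});

console.log(result); // The llama decided to scream very loudly.
```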

References

https://thecodingtrain.com/challenges/38-word-interactor

https://www.youtube.com/watch?v=7DG3kCDx53c&list=PLRqwX-V7Uu6YEypLuls7iidwHMdCM6o2w

https://www.papaparse.com/docs

[ITP: Programming A2Z] Constrained/Algorithmic Writing

Constrained Writing Techniques

Constrained writing is a technique in which the writer is bound by some condition that forbids certain things or imposes a pattern. The image on the right defines some techniques we learned in class. On top of that, we also covered my personal favorite, the N+7 method developed by Oulipo, in which each noun in a poem is replaced with the noun appearing seven entries away in the dictionary. There’s also the “cut up method” by William Burroughs, erasure poetry, and the “diastic technique” by Jackson Mac Low.

DOM (Document Object Model) Takeaways

  • Need to decide whether you want to work with HTML objects in the index.html file or the p5 version - no right answer really, different for each case

  • HTML elements hold a static state that you can change by calling one of their methods

  • p5 elements have a wrapper around an HTML element giving only simplified access to the element itself

  • You can create HTML elements within a p5 sketch using createElement() function

  • Other helpful functions: .class(), .id(), .removeElement()

p5.js and Text Takeaways

  • Helpful functions

    • s.value(), s.indexOf(), s.substring()

    • s.split() = native JavaScript function that splits a string into an array of items (“tokens”) at a delimiter

    • p5: loadStrings(), drop()

Assignment

I found this really cool sci-fi short story on, well, … LinkedIn🤢. Indra’s Web, by Vandana Singh, is a story about Mahua and how she transformed the once-declining city of Ashapur into a prosperous, technological city by employing biophilic design and emergent AI. I thought it could be cool to physically try out some of these techniques on this story.

Cut up method

Alright, I am not a very good computer! I don’t know how many times I read the steps to the cut up method but I did it wrong in my first attempt. The third pic below shows how I first organized a column of the story and I realized it wasn’t all that random because I was just reversing the order of pieces of the same paragraph.

Uh oh I messed up

This is my new text created by using the cut up method

Some selected excerpts:

she had always fear with wide, frightened eyes

sometimes she lies before the screens

holds her hand, but at other her. “Look,” she said trying to tell Mahua something. Everything according to connections with a skill that come up with some

Mahua, was named by their abrupt pauses and she was born under some hidden significance. Two of them—mother that the ants followed invisible from their village in—that the world was full of up in the slums, where like the electric wires eleven, only three months above the tenements.

surrounded her, pulled her say, now looks at Mahua their arguments before can’t speak but she can last. “You know how to check peacefully while the protocols. Do that.”

Make sure try. There’s hope. Working and just wait

tree on the way to Delhi. The such a hurry, and whether grandmother—had migrated frantically waving antennae after her father died.

(Random) Erasure Poetry

I really loved the last paragraph of this story. I wasn’t sure how to choose what to erase and what to keep, and I also really liked the random nature of the cut up method, so I created a simple p5 sketch that generated a random number (1-7) which I used to decide which words to erase and which to keep. The results are below. In hindsight, I probably shouldn’t have erased randomly because the end result, though it may have started out good, has little meaning.

pre-erasure

post-erasure
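For reference, here’s the gist of that erasure sketch, reconstructed from memory (so the details will differ from the real one):

```javascript
// Roll a number from 1 to 7 for each word and only keep the word on a 1.
let sourceLines;

function preload() {
  sourceLines = loadStrings('last-paragraph.txt'); // placeholder file name
}

function setup() {
  noCanvas();
  const words = sourceLines.join(' ').split(/\s+/);
  const erased = words.map((word) => {
    const roll = floor(random(1, 8)); // integer from 1 to 7
    return roll === 1 ? word : '_'.repeat(word.length);
  });
  createP(erased.join(' '));
}
```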

Readings

Stephanie Dinkins Eyeo Talk

  • “Not the Only One” created in direct opposition to BINA48

  • Oral history of her family, memory as an act of resistance

  • Eight Aboriginal Ways of Learning

  • Building small datasets that are more representative of people

  • “An algorithm is something you feed historic information to predict something” - Cathy O’Neil

  • What do machine learning systems created by and for specific communities look like?

  • Meet the technology where it actually is without expectations or assumptions of perfection

  • Demand equity, accountability, transparency, and inclusivity from AI

Resources

Cut up method

Stephanie Dinkins Eyeo Talk